Attention, Perception, & Psychophysics
Springer Science and Business Media LLC
All preprints, ranked by how well they match Attention, Perception, & Psychophysics's content profile, based on 17 papers previously published here. The average preprint has a 0.00% match score for this journal, so anything above that is already an above-average fit. Older preprints may already have been published elsewhere.
Aagten-Murphy, D.; Szinte, M.; Deubel, H.
Visual objects that are present both before and after eye movements can act as landmarks, aiding localization of other visual stimuli. We investigated whether visual landmarks would also influence auditory localization, despite participants' head position remaining unchanged. Participants made eye movements from central fixation to a peripheral visual landmark, which either remained stationary or was covertly displaced. Following the movement, participants judged whether a stimulus (auditory or visual) was shifted in location relative to before the movement. Visual localization estimates shifted along with the landmark, although the landmark displacement itself went unnoticed. Interestingly, auditory localization estimates were also displaced. Thus, despite identical auditory input reaching the ears, two auditory stimuli originating from the same position were perceived as spatially distinct when the visual landmark moved. These results are consistent with the idea that auditory spatial information is encoded within an eye-centered reference frame and subject to spatial recalibration by visual landmarks.
Highlights:
- Visual landmarks affect stimulus localization across eye movements
- We show this also for auditory stimuli, even when the head remains stable
- Due to a visual landmark displacement, identical auditory stimuli are perceived as shifted
- This suggests that auditory space is calibrated on eye-centered maps across saccades
Cavdan, M.; Drewing, K.; Doerschner, K.
The softness of objects can be perceived through several senses. For instance, to judge the softness of our cat's fur, we do not only look at it; we also run our fingers in idiosyncratic ways through its coat. Recently, we have shown that haptically perceived softness covaries with the compliance, viscosity, granularity, and furriness of materials (Dovencioglu et al., 2020). However, it is unknown whether vision can provide similar information about the various aspects of perceived softness. Here, we investigated this question in an experiment with three conditions: in the haptic condition, blindfolded participants explored materials with their hands; in the visual-static condition, participants were presented with close-up photographs of the same materials; and in the visual-dynamic condition, participants watched videos of the hand-material interactions that were recorded in the haptic condition. After haptically or visually exploring the materials, participants rated them on various attributes. Our results show a high overall perceptual correspondence between the three experimental conditions. With a few exceptions, this correspondence tended to be strongest between the haptic and visual-dynamic conditions. These results are discussed with respect to information potentially available through the senses, or through prior experience, when judging the softness of materials.
Mihali, A. L.; Ma, W. J.
Visual search is one of the most ecologically important perceptual task domains. One research tradition has studied visual search using simple, parametric stimuli within a signal detection theory or Bayesian modeling framework. However, this tradition has mostly focused on homogeneous distractors (identical to each other), which are not very realistic. In a different tradition, Duncan and Humphreys (1989) conducted a landmark study of visual search with heterogeneous distractors. However, they used complex stimuli, making modeling and the dissociation of component processes difficult. Here, we attempt to unify these research traditions by systematically examining visual search with heterogeneous distractors using simple, parametric stimuli and Bayesian modeling. Our experiment varied multiple factors that could influence performance: set size, task (N-AFC localization versus detection), whether the target was revealed before or after the search array (perception versus memory), and stimulus spacing. We found that performance robustly decreased with increasing set size. When examining within-trial summary statistics, we found that the minimum target-to-distractor feature difference was a stronger predictor of behavior than either the mean target-to-distractor difference or the distractor variance. To obtain process-level understanding, we formulated a Bayesian optimal-observer model. This model accounted for all summary statistics, including when fitted jointly to localization and detection. We replicated these results in a separate experiment with reduced stimulus spacing. Together, our results represent a critique of Duncan and Humphreys' descriptive approach, bring visual search with heterogeneous distractors firmly within the reach of quantitative process models, and affirm the "unreasonable effectiveness" of Bayesian models in explaining visual search.
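To make the three within-trial summary statistics concrete, here is a minimal sketch (hypothetical function and variable names, not the authors' code) of how the minimum target-to-distractor difference, the mean difference, and the distractor variance could be computed for a single search display:

```python
import numpy as np

def trial_summary_stats(target, distractors):
    """Within-trial summary statistics for search with heterogeneous
    distractors. Illustrative sketch only; for circular features such
    as orientation, differences should additionally be wrapped to the
    feature's period."""
    diffs = np.abs(np.asarray(distractors, dtype=float) - target)
    return {
        "min_td_diff": diffs.min(),          # strongest predictor in the study
        "mean_td_diff": diffs.mean(),        # weaker predictor
        "distractor_var": np.var(distractors),
    }

# Example: a 45-unit target feature among four heterogeneous distractors
print(trial_summary_stats(45.0, [30.0, 50.0, 80.0, 10.0]))
```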
Malik, A.; Yu, Y.; Boyaci, H.; Doerschner, K.
While research on the perception of line drawings has long demonstrated the importance of contours in object recognition, recent work shows that contours can also convey material properties. For example, even simple 2D shapes with varying contours have been shown to evoke vivid impressions of different materials (Pinna & Deiana, 2015). However, such static representations capture only a single moment in time. When a material moves, its contours shift, evolve, or deform over time, creating contour motion. Does this contour motion convey diagnostic information about material properties, independent of surface appearance? Existing studies on the role of dynamic cues in material perception either use fully rendered 3D stimuli, where contour motion is confounded with rich surface information, or motion-only displays (dynamic dot stimuli or noise patches), which eliminate surface cues but also lack clearly defined contours. As a result, the relative contribution of contour motion to material perception remains unclear. To address this gap, we measured how human observers perceive materials from dynamic line drawings ("line"), compared to animations of fully textured stimuli that carry optical and motion information ("full"), as well as dynamic dot stimuli ("dot"). Stimuli were rendered versions (full, dot, line) of material animations from five material categories (jelly, liquid, smoke, fabric, and rigid-breakable). In one experiment, participants rated five material attributes (dense, flexible, wobbly, fluid, airy motion); in a second experiment, participants chose which of two materials was more similar to a third material, across all possible combinations. Results from both experiments consistently reveal that (1) dynamic line drawings vividly convey mechanical material properties, and (2) the similarity in material judgments between the line and full conditions was larger than that between the dot and full conditions. We conclude that contour motion carries rich information about mechanical material qualities.
Palmer, J.; White, A. L.; Moore, C. M.; Boynton, G. M.
How well can one perceive simultaneous stimuli at two widely spaced visual locations? Are the stimuli processed independently? If not, does the dependency affect perception, disrupt signals in later stages, or both? To address these questions, we measured effects of divided attention using a dual-task paradigm with stimuli presented in noise on either side of fixation. This paradigm was applied to detecting Gabor patches and to the semantic categorization of words. We measured dual-task deficits, which are declines in mean performance for a dual task compared to a single task. There was such a deficit for categorizing two words but relatively little deficit for detecting two Gabors. We also measured congruency effects, which arise when performance at one location depends on whether the stimulus at the other location requires the same response. There was such a congruency effect for detecting two Gabors but relatively little congruency effect for categorizing two words. Further experiments were consistent with the dual-task deficit in word categorization being perceptual, but the congruency effect in Gabor detection being due to later processes. Results of additional experiments showed that the congruency effect was consistent with either graded selection errors or with all-or-none selection followed by graded interactive processing. To answer our opening question: for Gabor detection, perceptual processes were largely independent but later processes caused congruency effects; for word categorization, perceptual processes had capacity limits but, even in combination with later processes, caused relatively little congruency effect. In summary, there was evidence for two different kinds of dependency. Such complementary dependencies are inconsistent with theories of divided attention that rest on a single dependency, such as a single resource or a single source of interactive processing.
Linton, P.
Since Kepler (1604) and Descartes (1637), it has been suggested that vergence (the angular rotation of the eyes) plays a key role in size constancy. However, this has never been tested divorced from confounding cues such as changes in the retinal image. In our experiment, participants viewed a target that grew or shrank over 5 seconds. At the same time, the fixation distance specified by vergence was reduced from 50 cm to 25 cm. The question was whether the reduction in the viewing distance specified by vergence biased participants' judgements of whether the target grew or shrank. We found no evidence of any bias, and therefore no evidence that eye movements affect perceived size. If this is correct, then this finding has three implications. First, perceived size is much more reliant on cognitive influences than previously thought. This is consistent with the argument that visual scale is purely cognitive in nature (Linton, 2017; 2018). Second, it leads us to question whether the vergence modulation of V1 contributes to size constancy. Third, given the interaction between vergence, proprioception, and the retinal image in the Taylor illusion, it leads us to ask whether this cognitive approach could also be applied to multisensory integration.
Moreland, J. C.; Palmer, J.; Boynton, G. M.
Set-size effects in change detection are often used to investigate the capacity limits of dividing attention. Such capacity limits have been attributed to a variety of processes, including perception, memory encoding, memory storage, memory retrieval, comparison, and decision. In this study, we investigated the locus of the effect of increasing set size from 1 to 2. To measure purely attentional effects and not other phenomena such as crowding, a precue was used to manipulate the relevant set size while keeping the display constant across conditions. The task was to detect a change in the orientation of 1 or 2 Gabor patterns. The locus of the capacity limits was determined by varying when observers were cued to the only stimulus that was relevant. We began by measuring the baseline set-size effect in an initial experiment. In the next experiment, a 100% valid postcue was added to test for an effect of decision. This postcue did not change the set-size effects. In the critical experiments, a 100% valid cue was provided during the retention interval between displays, or only one stimulus was presented in the second display (local recognition). For both of these conditions, there was little or no set-size effect. This pattern of results was found both for hard-to-discriminate stimuli typical of perception experiments and for easy-to-discriminate stimuli typical of memory experiments. These results are consistent with capacity limits in memory retrieval and/or comparison. For these set sizes, the results are not consistent with capacity limits in perception, memory encoding, or memory storage.
Significance: The change detection paradigm is often used to demonstrate effects of divided attention. But it is not clear whether these effects are due to perception, memory, or judgment and decision. In this article, we present new evidence that the divided attention effect in change detection is due to limits in memory retrieval or comparison processes. These results are not consistent with limits in perception, memory encoding, or memory storage.
't Hart, B. M.; Cavanagh, P.
When two probes are flashed at different times within a moving frame, they can be perceived as dramatically separated from each other even though they are at the same location in the display. This effect suggests that we perceive object position relative to the surrounding frame even when it is moving (Ozkan et al., 2021). Here, 8 experiments reveal new properties of this frame effect. First, the influence of the frame on the perceived probe positions extends beyond its bounding contours by several degrees of visual angle, both in the direction of the frame's motion and orthogonal to it. It is also undiminished when the probes and the frame are in different depth planes. However, the influence of the frame's motion shows no extension in time: there is no effect on probes presented after the frame is removed, and none retroactively before the frame appears either. The frame effect is also driven primarily by the displacement of the frame, not by its motion signals: the effect is stronger for moving bounded frames than for moving, unbounded random-dot textures. When the bounded region has an internal texture that moves with or against the frame's motion or remains static, it is the displacement of the frame that produces the perceived position shifts of the probes, while the effect of the internal motion is mostly suppressed. The frame's influence is unaffected by whether the motion is self-initiated or not and does not reduce in strength across 2 hours of testing.
Pilipenko, A.; Samaha, J.; Nukala, V.; De La Torre, J.
A major distinction in early visual processing is between the magnocellular (MC) and parvocellular (PC) pathways. The MC pathway preferentially processes motion, transient events, and low spatial frequencies, while the PC pathway preferentially processes color, sustained events, and high spatial frequencies. Prior work has theorized that the PC pathway contributes more strongly to conscious object recognition via projections to the ventral "what" visual pathway, whereas the MC pathway underlies non-conscious, action-oriented motion and localization processing via the dorsal "where/how" pathway. This invites the question: Are we equally aware of activity in both pathways? And if not, do task demands interact with which pathway is more accessible to awareness? We investigated this question in a set of two studies measuring participants' metacognition for stimuli biased towards MC or PC processing. The "Steady/Pulsed Paradigm" presents brief stimuli under two conditions thought to favor either pathway. In the "pulsed" condition, the target appears atop a strong luminance pedestal, which theoretically saturates the transient MC response and leaves the PC pathway to process the stimulus. In the "steady" condition, the stimulus is identical except that the luminance pedestal is present throughout the trial, rather than flashed alongside the target. This theoretically adapts the PC neurons and leaves the MC pathway to do the processing. Experiment 1 was a spatial localization task thought to rely on information relayed from the MC pathway. Using both a model-based and a model-free approach to quantify participants' metacognitive sensitivity to their own task performance, we found greater metacognition in the steady (MC-biased) condition. Experiment 2 was a fine-grained orientation-discrimination task more reliant on PC pathway information. Our results show an abolishment of the MC-pathway advantage seen in Experiment 1 and suggest that the metacognitive advantage for MC processing may hold only for stimulus localization tasks. More generally, our results highlight the need to consider the possibility of differential access to low-level stimulus properties in studies of visual metacognition.
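The abstract mentions both model-based and model-free measures of metacognitive sensitivity. One widely used model-free measure is the area under the type-2 ROC curve, which asks how well confidence ratings discriminate correct from incorrect responses. The sketch below (hypothetical inputs and names, not the authors' pipeline) shows one way to compute it:

```python
import numpy as np

def auroc2(correct, confidence):
    """Area under the type-2 ROC: how well confidence ratings
    discriminate correct from incorrect responses (0.5 = chance).
    Sketch with hypothetical inputs; assumes both correct and
    error trials are present."""
    correct = np.asarray(correct, dtype=bool)
    confidence = np.asarray(confidence, dtype=float)
    # Sweep confidence criteria from strictest to most lenient
    h2, f2 = [0.0], [0.0]
    for c in np.unique(confidence)[::-1]:
        h2.append((confidence[correct] >= c).mean())   # type-2 hit rate
        f2.append((confidence[~correct] >= c).mean())  # type-2 false-alarm rate
    h2.append(1.0)
    f2.append(1.0)
    return np.trapz(h2, f2)

# Hypothetical example: confidence tracks accuracy reasonably well
print(auroc2(correct=[1, 1, 0, 1, 0, 1], confidence=[4, 3, 1, 4, 2, 2]))
```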
Chota, S.; Arora, K.; Kenemans, L.; Gayet, S.; Van der Stigchel, S.
Previous work has suggested that small directional eye movements not only reveal the focus of external spatial attention towards visible stimuli, but also accompany shifts of internal attention to stimuli in visual working memory (VWM) (van Ede et al., 2019). When the orientations of two bars are memorized and a subsequent retro-cue indicates which orientation needs to be reported, participants' gaze is systematically biased towards the former location of the cued item (Figure 1A,B). This finding was interpreted as evidence that the oculomotor system indexes internal attention; that is, attention directed at the location of stimuli that are no longer presented but are maintained in VWM. Importantly, as the location of the bars is presumably not relevant to the memory report, the authors concluded that orientation features in VWM are automatically associated with locations, suggesting that VWM is inherently spatially organized. This conclusion depends on the key assumption that participants indeed memorize and subsequently attend orientation features. Here we re-analyse Experiment 1 by van Ede et al. (2019) and demonstrate that this assumption does not hold. Instead of memorizing orientation features, participants deployed an alternative spatial strategy by memorizing bar endpoints. Although we do not call into question the conclusion that internal attention is inherently spatially organized, our results do imply that directional gaze biases might also reflect attention directed at task-relevant stimulus endpoints, rather than internal attention directed at memorized orientations.
Figure 1. Gaze density maps from Experiment 1 by van Ede et al. (2019) (N = 23, 20,864 trials included, 400 to 1000 ms). A, B: Originally reported effect of cued-item location on gaze bias, calculated by subtracting cued-item-left and cued-item-right gaze density maps. Rectangles indicate the stimulus positions and orientation ranges (min: 20°, mean: 45°, max: 70°; min: 110°, mean: 135°, max: 160°) of the bar stimuli. C: Normalized gaze-bias vectors per condition (red dotted lines), horizontal vectors (dotted black lines), and average vectors pointing towards the most foveal bar endpoints (solid black lines). Gaze-bias vector endpoints were calculated from the centre of mass of each condition, ignoring negative values. Circular t-tests revealed that individual gaze-bias vector angles (red dotted lines) differed significantly from the horizontal vectors (dotted black lines) but not from the endpoint vectors (solid black lines). F, I: Vertical gaze bias revealed by separating trials based on bar orientations; red dotted lines depict group-average gaze-bias vectors. F: Both bar endpoints "upwards" (left: 20° to 70°, right: 110° to 160°) minus both bar endpoints "downwards" (left: 110° to 160°, right: 20° to 70°). I: Both "downwards" minus both "upwards". D, E, G, H: Individual gaze density maps for each attention condition (left versus right) and bar-endpoint direction (upwards versus downwards) separately. Solid black lines show the average vector pointing towards the closest 45°/135° bar endpoint (i.e., the average optimal gaze location for solving the memory task through maintenance of a spatial location); red dotted lines depict group-average gaze-bias vectors (calculated from the centre of mass of each condition, ignoring negative values).
Clevenger, J.; Yang, P.-L.; Beck, D. M.
Over the years, a number of researchers have reported enhanced performance for targets located horizontally relative to a cued location, compared to targets located vertically. However, many of these reports could stem from a known meridian asymmetry, in which stimuli on the horizontal meridian show a performance advantage relative to those on the vertical meridian. Here we show a horizontal advantage for target and cue locations that reside outside the zone of asymmetry; that is, targets that appear horizontal to the cue, but above or below the horizontal meridian, are reported more accurately than those that appear vertical to the cue, again either above or below the horizontal meridian (Experiments 1 and 4). This advantage extends neither to non-symmetrically located targets in the opposite hemifield (Experiment 2) nor to horizontally located targets within the same hemifield (Experiment 3). These data raise the possibility that display designs in which the target and cue locations are positioned symmetrically across the vertical midline may be underestimating the cue validity effect.
von Mohr, M.; Kirsch, L. P.; Loh, J. K.; Fotopoulou, A.
Touch can give rise to different sensations, including sensory, emotional, and social aspects. Tactile pleasure, typically associated with caress-like skin stroking at slow velocities (1-10 cm/s), has been hypothesised to relate to an unmyelinated, slow-conducting C-tactile afferent system (CT system), developed to distinguish affective touch from the noise of other tactile information on hairy skin (the so-called social touch hypothesis). However, to date, there is no psychometric examination of the discriminative and metacognitive processes that contribute to accurate awareness of pleasant touch stimuli. Over two studies (total N = 194), we combined for the first time CT stimulation with signal detection theory and metacognitive measurements to assess the social touch hypothesis on the role of the CT system in affective touch discrimination. Participants' ability to accurately discriminate the pleasantness of tactile stimuli of different velocities, as well as their response bias, was assessed using a forced-choice task (high- versus low-pleasantness response) on two different skin sites: the forearm (CT skin) and the palm (non-CT skin). We also examined whether such detection accuracy was related to confidence in the decision (metacognitive sensitivity). Consistent with the social touch hypothesis, we found higher sensitivity (d′) on the forearm than on the palm, indicating that people are better at discriminating between stimuli of high and low tactile pleasantness on a skin site that contains CT afferents. Strikingly, we also found a more negative response bias on the forearm than on the palm, indicating a tendency to experience all stimuli on CT skin as highly pleasant, with such effects depending on order, likely explained by prior touch exposure. Finally, we found that people have greater confidence in their ability to discriminate between affective touch stimuli on CT-innervated skin than on non-CT skin, possibly relating to the domain specificity of CT touch. This suggests a domain-specific, metacognitive hypothesis that can be explored in future studies as an extension of the social touch hypothesis.
Highlights:
- Touch mediated by C-tactile (CT) afferents on hairy skin elicits pleasant sensations
- We combine for the first time CT stimulation with signal detection theory
- Better accuracy to detect pleasantness of tactile stimuli at CT-optimal speeds on CT skin
- Higher confidence in ability to accurately distinguish affective touch on CT skin
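For reference, sensitivity (d′) and response bias (criterion c) in an equal-variance Gaussian signal detection analysis like the one described above are derived from hit and false-alarm rates. The sketch below uses generic, hypothetical response counts rather than the study's data; here a "hit" would be a high-pleasantness response to a high-pleasantness stimulus:

```python
from scipy.stats import norm

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Equal-variance Gaussian SDT: sensitivity d' and criterion c.
    Generic sketch with hypothetical counts, not the study's data.
    The log-linear correction avoids rates of exactly 0 or 1."""
    hr = (hits + 0.5) / (hits + misses + 1.0)
    far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    d_prime = norm.ppf(hr) - norm.ppf(far)
    criterion = -0.5 * (norm.ppf(hr) + norm.ppf(far))  # negative = liberal bias
    return d_prime, criterion

# Hypothetical counts from a high- vs low-pleasantness forced choice
print(sdt_measures(hits=40, misses=10, false_alarms=20, correct_rejections=30))
```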
Schroeger, A.; Merz, S.
Human perception of space, time, and motion is subject to several biases. Lab studies have shown such effects in psychophysical judgements of location, but also in action tasks such as predicting motion and intercepting. Given that similar underlying processes have been proposed for some of these biases, we tested for a shared mechanism by correlating them across observers. Using the classical implied-motion sequence, participants either indicated the remembered location of an intermittently presented dot moving consistently from one location to the next, or intercepted a predicted future location of the same intermittently presented dot. We examined whether the errors in these tasks are associated by correlating, across participants, (i) the overall amount of overshooting and (ii) the effect of temporal manipulations of the jump duration on these biases. We found two medium-sized correlations, indicating that these two biases are indeed related to each other. Participants who showed a larger effect in one task also showed a larger effect in the other task, and participants more prone to temporal features showed this consistently across both tasks. This suggests a shared underlying mechanism, and theoretical implications are discussed.
Liang, W.; Noyce, A. L.; Brown, C. A.; Shinn-Cunningham, B. G.
Task-irrelevant features can impact the formation of auditory objects and influence the effectiveness of selective attention, including the buildup of attention over time. Using a previously established paradigm exploring the effects of random interruptions on spatial selective attention, this study explores how the task-irrelevant feature of talker identity impacts the buildup of spatial attention and whether it alters the impact of interruptions. Participants performed a sequence recall task in which they were presented with two competing syllable sequences coming from different spatial directions and were asked to report the syllable sequence coming from the target direction. On half of the trials, an unpredictable, novel interrupting sound occurred, disrupting attentional focus. Two experiments explored how talker identity influenced performance; specifically, whether (1) having the two streams spoken by different talkers facilitates task performance and reduces the impact of interruption compared to when the streams are spoken by the same talker, and (2) talker discontinuity interferes with attention buildup and harms syllable recall performance compared to when the talker is the same from one syllable to the next. Our results showed that distinct talker features, though task-irrelevant in this spatial task, significantly improved syllable recall performance and reduced the impact of interrupters. Further, irrelevant talker discontinuities disrupted attention buildup and reduced syllable recall performance.
Wang, G.; Alais, D.
Spatial frequency is a fundamental feature in both visual and somatosensory perception, yet how these modalities integrate spatial frequency information remains unclear. This study investigates whether visuotactile spatial frequency perception follows the principles of Maximum Likelihood Estimation (MLE) and whether spatial proximity influences multisensory integration, using virtual reality (VR) and high-precision 3D-printed tactile stimuli. Experiment 1 found that the visuotactile integration of spatial frequency cues follows the MLE rule. However, Experiment 2 revealed that this integration is not affected by spatial proximity. These findings provide additional insights into the feature dependency of multisensory integration between vision and touch and highlight the potential for independent processing before integration, offering new perspectives on the mechanisms of spatial frequency processing in both visual and tactile modalities.
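For reference, the textbook MLE prediction tested in Experiment 1 combines the visual (V) and tactile (T) estimates with weights inversely proportional to their variances, so the bimodal estimate is at least as reliable as the better single cue. This is the standard formulation, not an equation taken from the paper itself:

```latex
\hat{s}_{VT} = w_V \hat{s}_V + w_T \hat{s}_T,
\qquad
w_i = \frac{1/\sigma_i^{2}}{1/\sigma_V^{2} + 1/\sigma_T^{2}},
\qquad
\sigma_{VT}^{2} = \frac{\sigma_V^{2}\,\sigma_T^{2}}{\sigma_V^{2} + \sigma_T^{2}}
\le \min\!\left(\sigma_V^{2}, \sigma_T^{2}\right)
```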
Kallmayer, A.; Vo, M. L.-H.; Draschkow, D.
Viewpoint effects on object recognition interact with object-scene consistency effects. While recognition of objects seen from "accidental" viewpoints (e.g., a cup from below) is typically impeded compared to processing of objects seen from canonical viewpoints (e.g., the string side of a guitar), this effect is reduced by meaningful scene context information. In the present study, we investigated whether these findings, established using photographic images, generalise to 3D models of objects. Using 3D models further allowed us to probe a broad range of viewpoints and to establish accidental and canonical viewpoints empirically. In Experiment 1, we presented 3D models of objects from six different viewpoints (0°, 60°, 120°, 180°, 240°, 300°), in colour (1a) and grayscale (1b), in a sequential matching task. Viewpoint had a significant effect on accuracy and response times. Based on performance in Experiments 1a and 1b, we determined canonical (0° rotation) and non-canonical (120° rotation) viewpoints for the stimuli. In Experiment 2, participants again performed a sequential matching task; however, the objects were now paired with scene backgrounds that could be either consistent (e.g., a cup in the kitchen) or inconsistent (e.g., a guitar in the bathroom) with the object. Viewpoint interacted significantly with scene consistency, in that object recognition was less affected by viewpoint when consistent scene information was provided than when inconsistent information was provided. Our results show that viewpoint dependence and scene context effects generalize to depth-rotated 3D objects. This supports the important role that object-scene processing plays in object constancy.
Alais, D.; Folgueiras, U. F.; Leung, J.
We report behavioural findings implying common motion processing for auditory and visual motion. We presented brief translational motion stimuli drifting leftwards or rightwards in the visual or auditory modality at various speeds. Observers made a speed discrimination on each trial, comparing the current speed against the mean speed (i.e., the method of single stimuli). Data were compiled into psychometric functions, and their means and slopes were compared. Slopes for auditory and visual motion were identical, consistent with a common noise source, although mean speed for audition was veridical while visual speeds were significantly underestimated. An inter-trial analysis revealed clear motion priming in both audition and vision (i.e., faster perceived speed after a fast preceding speed, and vice versa: a positive serial dependence). Plotting priming as a function of preceding speed revealed the same slope for each modality. We also tested whether motion priming was modality specific. Whether vision preceded audition or audition preceded vision, a positive serial bias (i.e., priming) was always observed. We conclude that a common process underlies auditory and visual motion, and that this explains the closely matched data in vision and audition, as well as the crossmodal data showing equivalent motion priming regardless of the preceding trial's modality.
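In the method of single stimuli, each trial's "faster than average" judgment contributes to a psychometric function whose mean gives the point of subjective equality and whose spread indexes discrimination noise. A minimal cumulative-Gaussian fit (hypothetical data and function names, not the authors' analysis code) might look like this:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def fit_psychometric(speeds, p_faster):
    """Fit a cumulative-Gaussian psychometric function to proportions of
    'faster than the mean' judgments (method of single stimuli).
    Returns (pse, sigma); a flatter slope means a larger sigma, i.e.,
    more noise. Illustrative sketch with hypothetical data."""
    model = lambda x, mu, sigma: norm.cdf(x, loc=mu, scale=sigma)
    (pse, sigma), _ = curve_fit(model, speeds, p_faster,
                                p0=[np.mean(speeds), 1.0])
    return pse, sigma

speeds = np.array([4.0, 6.0, 8.0, 10.0, 12.0])       # test speeds (deg/s)
p_faster = np.array([0.05, 0.20, 0.55, 0.80, 0.97])  # proportion judged faster
print(fit_psychometric(speeds, p_faster))
```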
Chapman, A. F.; Allouche, M.; Denison, R. N.
Our perception of the world is transformed by attention, both in terms of the efficiency of information processing and the appearance of attended stimuli. A standard theory is that attention regulates how competing stimuli vie for resources. However, an alternative perspective is that attention alters the representational geometry of stimulus spaces, such that changes in processing are not isolated to the particular competing stimuli but are reflected across the entire perceptual space. To test this representational hypothesis, we conducted an experiment in which participants reported the perceived similarity of orientations spanning the full stimulus space while attention was either directed or not directed to specific orientations. We used these similarity judgments to measure the representational geometry of orientation, finding that attention reliably expanded the representational space in a narrow range around the attended orientations. We also found evidence for compression of the representational space in a broad range around unattended orientations. Our findings support the idea that attention acts to reshape the representation of entire perceptual spaces in a way that supports processing of relevant stimulus features. By simultaneously manipulating attention while measuring perceptual similarity, our methodological framework opens the door for future work investigating the interaction between cognitive and perceptual processes from the perspective of representational geometry.
Samaha, J.; Denison, R.
Confidence in a perceptual decision is a subjective estimate of the accuracy of one's choice. As such, confidence is thought to be an important computation for a variety of cognitive and perceptual processes, and it features heavily in theorizing about conscious access to perceptual states. Recent experiments have revealed a "positive evidence bias" (PEB) in the computations underlying confidence reports. A PEB occurs when confidence, unlike objective choice, over-weights the evidence for the chosen option relative to the evidence against it. Accordingly, in a perceptual task, stimulus conditions can be arranged that produce selective changes in confidence reports but no changes in accuracy. Although the PEB is generally assumed to reflect the observer's perceptual and/or decision processes, post-decisional accounts have not been ruled out. We therefore asked whether the PEB persisted under novel conditions that eliminated two possible post-decisional accounts: (1) post-decision evidence accumulation that contributes to a confidence report solicited after the perceptual choice, and (2) a memory bias that emerges in the delay between stimulus offset and the confidence report. We found that even when the stimulus remained on the screen until observers responded, and when observers reported their choice and confidence simultaneously, the PEB still emerged. Signal detection-based modeling also showed that the PEB was associated not with changes in metacognitive efficiency but with changes in confidence criteria. We conclude that once-plausible post-decisional accounts of the PEB do not explain the bias, bolstering the idea that it is perceptual or decisional in nature.
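Schematically, the PEB is often formalized as follows (a common formulation in this literature, not necessarily the exact model used in the paper). With evidence e_A and e_B for the two options, the choice tracks the balance of evidence while confidence over-weights the chosen option's evidence:

```latex
\text{choose } A \iff e_A > e_B,
\qquad
\mathrm{conf} \propto a\, e_{\mathrm{chosen}} - b\, e_{\mathrm{unchosen}},
\quad a > b
```

Raising e_A and e_B by equal amounts then leaves the choice variable e_A - e_B, and hence accuracy, unchanged while inflating confidence, which is the signature dissociation such experiments exploit.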
Siedlecka, M.; Paulewicz, B.; Koculak, M.
Studies on confidence in decision-making tasks have repeatedly shown correlations between confidence and the characteristics of motor responses. Here, we present the results of two experiments in which we manipulated the type of motor response that precedes a confidence rating. Participants decided which box, left or right, contained more dots and then reported their confidence in this decision. In Experiment 1, prior to the confidence rating, participants were required to follow a motor cue. The cued response type was manipulated along two dimensions: task compatibility (the relation between the response set and the task-relevant decision alternatives) and stimulus congruence (the spatial correspondence between the response key and the location of the stimulus that should be chosen). In Experiment 2, the decision-related response set was randomly varied on each trial, being either vertical (task-incompatible) or horizontal (task-compatible, and spatially congruent or incongruent). The main results showed that choice confidence increased following task-compatible responses, i.e., responses related to the alternatives of the choice in which confidence was reported. Moreover, confidence was higher in these conditions independently of response accuracy and of spatial congruence with the correct stimulus. We interpret these results as suggesting that an action appropriate in the context of a given task is an indicator of successful completion of the decision-related process. Such an action, even a spurious one, inflates decisional confidence.